reproduce write latency #18121
base: main
Conversation
Signed-off-by: Chao Chen <[email protected]>
Skipping CI for Draft Pull Request.
```diff
@@ -61,31 +61,23 @@ func init() {
 }
 
 func watchLatencyFunc(_ *cobra.Command, _ []string) {
-	key := string(mustRandBytes(watchLKeySize))
+	key := "/registry/pods"
```
How does this impact the results? If it doesn't, please remove it.
It does not. Will remove.
The fixed key is just to demonstrate that this could happen on pods, and that it is a generic problem (not limited to events).
```diff
-			putReport.Results() <- report.Result{Start: start, End: end}
-			putTimes[i] = end
 
+	var putCount atomic.Uint64
```
Can you clarify what the goal of these changes is? From what I see, you removed the watch latency measurement, parallelized the puts, and now measure put latency. Maybe I'm just surprised you modified the watch latency benchmark rather than the put one.
Sorry about that.
I meant to demonstrate that write latency could be impacted by a slow syncWatcher.
Hopefully we can find a balance between write and watch latency.
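To make the discussion concrete, here is a minimal standalone sketch (not the PR's actual code) of what the reviewer describes: puts issued from several goroutines, per-put latency recorded, and an atomic counter handing out work. The helper name `runPuts`, the endpoint, and the counts are illustrative assumptions.

```go
// Minimal sketch: parallelized puts with per-put latency measurement.
// Illustrative only; not the benchmark tool's actual implementation.
package main

import (
	"context"
	"fmt"
	"sync"
	"sync/atomic"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// runPuts writes `total` values to `key` using `concurrency` goroutines and
// prints the average observed put latency.
func runPuts(cli *clientv3.Client, key string, total, concurrency int) {
	var putCount atomic.Uint64
	latencies := make([]time.Duration, total)

	var wg sync.WaitGroup
	for w := 0; w < concurrency; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				// Claim the next put index; stop once all puts are claimed.
				i := putCount.Add(1) - 1
				if i >= uint64(total) {
					return
				}
				start := time.Now()
				if _, err := cli.Put(context.Background(), key, "value"); err != nil {
					fmt.Println("put error:", err)
					return
				}
				latencies[i] = time.Since(start)
			}
		}()
	}
	wg.Wait()

	var sum time.Duration
	for _, d := range latencies {
		sum += d
	}
	fmt.Printf("avg put latency over %d puts: %v\n", total, sum/time.Duration(total))
}

func main() {
	// Endpoint is an assumption about a locally running, insecure etcd.
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"127.0.0.1:2379"}})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	runPuts(cli, "/registry/pods", 10000, 16)
}
```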
```diff
@@ -83,7 +83,7 @@ func NewReport(precision string) Report { return newReport(precision) }
 
 func newReport(precision string) *report {
 	r := &report{
-		results:   make(chan Result, 16),
+		results:   make(chan Result, 65536),
```
How does this impact the results? If it doesn't, please remove it.
It does not. Will remove.
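For context on the buffer-size change above, here is a standalone sketch (not the benchmark's actual report package) of how a large results-channel buffer lets producers record results without blocking on a slow consumer. The `Result` type and the 65536 buffer mirror the diff; everything else is illustrative.

```go
// Standalone sketch: a buffered results channel decouples the goroutines that
// record latencies from the consumer that aggregates them. Illustrative only.
package main

import (
	"fmt"
	"time"
)

// Result mirrors the report.Result shape from the diff: a start and end time.
type Result struct {
	Start, End time.Time
}

func main() {
	// Large buffer so producers rarely block on a slow consumer.
	results := make(chan Result, 65536)

	// Producer: simulate operations and push one Result per operation.
	go func() {
		for i := 0; i < 1000; i++ {
			start := time.Now()
			time.Sleep(time.Millisecond) // stand-in for an actual put call
			results <- Result{Start: start, End: time.Now()}
		}
		close(results)
	}()

	// Consumer: drain the channel and report the average latency.
	var total time.Duration
	n := 0
	for r := range results {
		total += r.End.Sub(r.Start)
		n++
	}
	fmt.Printf("avg latency over %d results: %v\n", n, total/time.Duration(n))
}
```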
This is not a proper reproduction of the issue we have seen in the clusterloader2 test. The put latency is bounded by disk I/O, which is reflected by the WAL fsync latency metric.
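As a hedged illustration of checking that metric, the snippet below scrapes an etcd `/metrics` endpoint and prints the `etcd_disk_wal_fsync_duration_seconds` histogram lines. The local endpoint address and the use of plain HTTP are assumptions about the test setup.

```go
// Sketch: scrape the etcd metrics endpoint and print WAL fsync latency lines.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Assumes a local, insecure etcd exposing metrics on its client port.
	resp, err := http.Get("http://127.0.0.1:2379/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print only the WAL fsync duration histogram series.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "etcd_disk_wal_fsync_duration_seconds") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
}
```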
Please read https://github.com/etcd-io/etcd/blob/main/CONTRIBUTING.md#contribution-flow.
Related to #18109
Please run
In another terminal run
Result I got with multiple runs